A SUPPLEMENT TO LISP: PROGRAMMING AND PROVING
---------------------------------------------
CS 206 (Lisp)
Thursday 11 November 1980
Professor: John McCarthy
TA: Scott Kim
The purpose of this little paper is to provide some of the practical intuition
that is useful in working with Lisp but is not provided by the textbook. I will
summarize notational conventions by giving examples, highlight important and/or
confusing points, suggest standard cookbook approaches, and walk through examples.
Things to know
--------------
* The style of this class is largely theoretical. This is NOT your typical
cookbook programming course. There are three main threads through the course:
(1) Pure Lisp. Atoms, lists, s-exps, CAR, CDR, CONS, COND, EVAL, LAMBDA.
Various predicates, arithmetic, funarg problem, variable binding.
(2) Lisp treated as a mathematical language in the tradition of
recursive function theory. Worries about proving properties of programs and
working cleanly with partial (perhaps nonterminating) functions. Expects you to absorb a
rather large chunk of first order logic and proof technique. The theory of Lisp
is in its infancy. Enough impressive examples have been accumulated to give it
a name, but it is not yet a well-oiled machine. Problems of metalanguage vs.
object language and nontermination (bottom) get very interesting very fast.
(3) Real live Lisp. Circularly linked lists, RPLACA, program loops,
PROG, file i/o, fighting with LOTS, self-confidence, completing a term
project, wrestling with documentation (or lack thereof), doing homework.
* Notations: internal, external, boxes and arrows.
* s-expressions: numbers, atoms, lists, trees, circularly linked.
* Everything in Lisp is an s-expression. Even programs! Even the compiler!
* Lisp is interactive: evaluates anything it sees. If you don't want
it to evaluate something, quote it.
* Lisp is used in AI because it has dynamic data structures and can
refer to programs.
* Everything is redefinable, even system functions like CAR, but don't
tell anyone I said so.
* Function calls in Lisp have the form (F ARG1 ARG2 ARG3...), which
corresponds with the mathematical notation F[ARG1,ARG2,ARG3...].
Note that the function name goes INSIDE the parentheses.
* LAMBDA expressions can be used in place of function names, sort of like macros.
* Passing function names as arguments gets confusing, since we have
to keep track of how many times things get evaluated. If you want
to call a variable function, as in (DEFUN P (F X) (F X)), try
instead (DEFUN P (F X) (FUNCALL F X)); see the small sketch at the end of this list.
* Lisp functions tend to be recursive. Tail recursion can be used to
simulate iteration, but Pure Lisp has no loops. As they say, to
iterate is human, to recurse divine.
* EQ is stronger than EQUAL.
* COND is like IF-THEN-ELSE-IF-THEN-ELSE-IF-THEN-ELSE...
* Be sure the last clause in your COND starts with (T... ,
otherwise you may fall off the end of the world.
* Better names for CAR and CDR applied to lists are FIRST and REST.
* Better names for CAR and CDR applied to s-exps are LEFT and RIGHT.
* Watch your CONSes and APPENDs, and be sure to LISTify the right number
of times. Even an experienced Lisper gets these wrong frequently.
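Here is the small sketch promised above for calling a variable function. It is
only an illustration -- P comes from the bullet above, ADD3 is a made-up helper,
and the exact spellings (FUNCALL, ADD1) vary from dialect to dialect.
    (DEFUN P (F X) (FUNCALL F X))            ; apply whatever function F names to X
    (DEFUN ADD3 (N)                          ; a little function to pass around
      (ADD1 (ADD1 (ADD1 N))))                ; ADD1 is written 1+ in some dialects
    ;; (P 'ADD3 4) should return 7: F is bound to the atom ADD3, and FUNCALL
    ;; applies the function it names.  Writing (F X) instead would look for a
    ;; function literally named F.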
WRITING PURE LISP PROGRAMS
--------------------------
Notation
--------
External notation Internal notation
----------------- -----------------
'a (QUOTE a) or 'a
= EQ or EQUAL
f x (f x)
f[x,y,z] (f x y z)
A '(a b c d) (CAR '(a b c d))
D '(a b c d) (CDR '(a b c d))
'a . '(b c d) (CONS 'a '(b c d))
ADD '(a b c d) (CADDR '(a b c d))
= (CAR (CDR (CDR '(a b c d))))
IF a THEN b ELSE (COND (a b)
IF c THEN d ELSE e (c d) (T e))
[λu v.(e)] x ((LAMBDA (u v) (e)) x)
N AT NULL ATOM
<x> (LIST x)
← DEFUN
true false T NIL
Precedence:
Functions of a single argument bind tightest, e.g. A D
∧ binds tighter than ∨
Typed variables:
u v w Lists
uu vv ww Nonnull lists
x y z S-exps
xx yy zz Nonnull s-exps
The external notation for Lisp is more convenient than the internal for
some purposes, such as proving properties of programs, and looks more like
mathematical notation. Several slightly unusual conventions: in this paper,
I capitalize reserved words like IF-THEN-ELSE rather than underlining them.
Functions of one variable do NOT need parentheses around their argument.
Furthermore, () and [] are interchangeable. Example: A[x] = A x = (A x).
Cookbook
--------
* First make sure you know what the function is to do.
* Consider nontrivial examples.
* Consider trivial examples. Be especially sure to consider the empty case
and cases which cause errors.
* Think recursively. First program the termination condition.
The recursive definition of walking: first see if you've arrived; otherwise
take a step and call yourself.
* Express the problem in terms of smaller functions.
Don't worry about
* Passed variables. Sometimes the value to be returned can be remembered in a
dummy variable which is passed down.
* Help functions. Allow error checking, add on new arguments, and decompose the
problem into manageable top-down chunks. Think recursively!
* Mapping functions
* Error conditions can be handled different ways or ignored.
* Alternate approaches: Boolean vs. conditional.
* Check to make sure you included all preconditions. You must check ¬(AT x)
before you can try taking (A x) or (D x); see the small sketch at the end of this list.
* NIL is both an atom and a list.
* Divide and conquer: consider splitting into cases.
* Booleans and everything else are evaluated left to right.
* Debugging
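Here is the small sketch promised in the list above. It illustrates checking
AT x before taking A x or D x, and thinking recursively. The function SIZE is
made up for this example, and PLUS is written + in some dialects.
    (DEFUN SIZE (X)                          ; number of atoms in an s-expression
      (COND ((ATOM X) 1)                     ; check AT x before taking A x or D x
            (T (PLUS (SIZE (CAR X))          ; otherwise recurse on both halves
                     (SIZE (CDR X))))))
    ;; (SIZE '((A . B) . C)) returns 3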
-------------------------------------------------------------------------
Example
-------
Program the function group[u], which performs the following action:
group '(a a b c c c) returns '((a a) (b) (c c c))
group '(a a a b a) returns '((a a a) (b) (a))
Straightforward efficient brute force list recursion approach
-------------------------------------------------------------
Analysis: suppose we know group[D u]. There are two cases to consider.
Either A u is the same as or different from the first atom of group[D u]. So all
we have to do is figure out how to stick A u onto group[D u].
A u group[D u] Desired group[u]
--- ---------- ----------------
Case 1: 'a '((a) (b) (c c c)) '((a a) (b) (c c c))
Case 2: 'a '((b) (c c c)) '((a) (b) (c c c))
Hmmmmm. Simple consing won't work; we'll probably have to take the list
apart a little then put it back together.
group[u] ←
   IF N u THEN NIL                               Empty case comes first.
   ELSE IF N D u THEN <u>                        We had to go back to add this condition.
   ELSE IF A u = AD u                            First case: first two elements same.
     THEN [A u . A group[D u]] . D group[D u]    This form is elusive but straightforward.
                                                 More efficient would be
                                                 (λv.[[A u . A v] . D v]) group[D u]
   ELSE <A u> . group[D u]                       Second case: first two elements different.
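For reference, here is the same brute-force definition transcribed into internal
notation so it can be typed at the interpreter. This is only a sketch; the
translation of <...>, A, D, and = into LIST, CAR, CDR, and EQ follows the
notation table above, and dialects differ in details.
    (DEFUN GROUP (U)
      (COND ((NULL U) NIL)                             ; empty case comes first
            ((NULL (CDR U)) (LIST U))                  ; one element left: one group
            ((EQ (CAR U) (CADR U))                     ; first two elements the same
             (CONS (CONS (CAR U) (CAR (GROUP (CDR U))))
                   (CDR (GROUP (CDR U)))))             ; add A u onto the first group
            (T (CONS (LIST (CAR U)) (GROUP (CDR U))))))  ; start a new group
    ;; (GROUP '(A A B C C C)) returns ((A A) (B) (C C C))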
Top-down elegant on the outside inefficient on the inside approach
------------------------------------------------------------------
Analysis: let's grab the first string of identical atoms, then worry about
the rest.
group[u] ←
   IF N u THEN NIL ELSE supercar[u] . group[supercdr[u]]
supercar[u] ←
IF N u THEN NIL
   ELSE IF N D u THEN u
ELSE IF A u = AD u
THEN A u . supercar[D u]
ELSE <A u>
supercdr[u] ←
IF N u ∨ N D u THEN NIL
ELSE IF A u = AD u
THEN supercdr[D u]
ELSE D u
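For reference, a sketch of supercar and supercdr in internal notation; group is
then just (CONS (SUPERCAR U) (GROUP (SUPERCDR U))) for non-null u. Dialect
details vary.
    (DEFUN SUPERCAR (U)                      ; the first run of identical atoms
      (COND ((NULL U) NIL)
            ((NULL (CDR U)) U)
            ((EQ (CAR U) (CADR U)) (CONS (CAR U) (SUPERCAR (CDR U))))
            (T (LIST (CAR U)))))
    (DEFUN SUPERCDR (U)                      ; whatever follows that first run
      (COND ((OR (NULL U) (NULL (CDR U))) NIL)
            ((EQ (CAR U) (CADR U)) (SUPERCDR (CDR U)))
            (T (CDR U))))
    ;; (SUPERCAR '(A A B C)) returns (A A); (SUPERCDR '(A A B C)) returns (B C)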
Auxiliary variable in auxiliary function often more efficient approach
----------------------------------------------------------------------
Analysis: let's make an assembly line by making up new variables to hold
the results of each step in the process. u = raw input. w = current
subgroup of identical atoms. Note: if you try this approach, PLEASE
document what the various arguments are supposed to do.
group[u] ←
IF N u THEN NIL ELSE group1[u,NIL]
group1[u,w] ←
IF N u THEN <w>
ELSE IF A u = AD u
     THEN group1[D u,A u . w]
   ELSE w . group1[D u,<A u>]
Subtle point: note that the atoms in w get stacked up backwards. It doesn't
matter in this case, since all the atoms are alike, but in similar cases in
other functions, we might have to take REVERSE or figure out how to stack
things the other way.
PROVING PROPERTIES OF PROGRAMS
------------------------------
Notation
--------
First order logic Informal notation
----------------- -----------------
∧ ∨ ¬ ⊃ ≡ and or not implies equivalent
∀a b.(e) for all a and b, it is the case that e
BOTTOM The value returned by a nonterminating program
Written out AND OR Lisp AND and OR (might return BOTTOM)
∧ ∨ Return logical values (known to terminate)
Precedence:
∧ binds tighter than ∨
Motivation: just because you've written a program doesn't mean you know
anything about what it will do. Proofs are better than debugging because
you know when you're done -- you've isolated the uncertainty. If you're
mathematically minded, proving properties of Lisp programs raises all sorts
of interesting proof-theoretical questions, especially handling
nontermination.
Cookbook
--------
* Choose a variable to do induction on. If symmetrical, choose the first variable --
everything in Lisp works left to right.
* Hopefully simple list or s-exp induction suffices and you don't need
a rank function.
* Maybe the proof is direct. Common example: if we are to show that some
recursive function definition is equivalent to some nonrecursive form,
the thing to do is directly substitute the functional equation and
reduce the expression till it comes out correct, all without using induction.
* Substituting into the functional equation does NOT, however, guarantee
termination. It says only that IF the open form terminates, it returns
the same value as the closed form.
* It may be easier to prove on an alternate form, then prove equivalence.
* The form of an inductive proof is given in a couple of pages.
Basic idea: discharge base case, then given the inductive hypothesis,
prove the inductive conclusion.
This style of proof is similar to that of trigonometry identities: a
series of successively simpler expressions, each equivalent to and
incrementally simpler than the previous, working towards some final form.
Common sorts of reasons which can be given for proof steps include
* substituting definitions
* evaluating Lisp primitives like IF-THEN-ELSE or CAR,CDR,CONS
* factoring a conditional: f(IF a THEN b ELSE c) becomes IF a THEN f(b) ELSE f(c).
* using axioms on Lisp primitives or domain mapping information, e.g. ¬N (a.b).
See section 3.10 in McCarthy.
* using lemmas on simple functions like REVERSE, APPEND, LENGTH (there are lots
of these in the book). Combinations like REVERSE of APPEND are especially
powerful in manipulating formulas.
* the inductive hypothesis said so. If you are in an inductive proof,
always be on the lookout for places to use the inductive hypothesis.
In fact, you can often predict in advance how the hypothesis will be used,
making that step your subgoal.
* the precondition said so. If you are trying to prove something of the form
∀x.(precondition(x) ⊃ property(x))
then go ahead and start rewriting property(x), calling in precondition(x)
only when you need it. We are proving the implication by saying that
property(x) is true based on precondition(x).
* type information said so. You may assume that, for instance, ISLIST(u).
Sometimes you may want to expand the definition of a domain such as ISLIST.
* TRICKS. Everything mentioned so far is rather straightforward, except maybe
remembering all those lemmas. Sometimes in order to get your expression
in a form which can be manipulated, you need to take a backward step,
making an expression larger before making it smaller. A common example
is expressing u as <A u>*[D u] so that you can make use of facts about
append. There is no firm rule about when to use what trick, but
if you are stuck, it is helpful to look around for lemmas that might
apply and then try to get your expression into that form.
QUESTION: how much do I write down? Answer: it's all right to collapse
one or two steps, but otherwise be rather specific. You don't need to
say everything, however. Example: if you want to reduce
IF ¬N (a . b) THEN c ELSE d
to just d, it is more helpful to say that you used a fact about cons
(namely cons maps onto only non-null s-exps) rather than saying you noticed
that ¬N (a . b).
Common proof forms (induction schemas) include
* (Natural) number induction. (Induction on < )
Note: in this and other forms of induction it is legal to choose a slightly
different base case, e.g. number induction whose basis is 1 instead of 0.
PHI(0) ∧ ∀n nn.[n<nn∧PHI(n)⊃PHI(nn)]
⊃ ∀n.PHI(n)
* List induction (induction on CDR of a list).
∀x.[AT x⊃PHI(x)] ∧ ∀xx.[PHI(A xx)∧PHI(D xx)⊃PHI(xx)]
⊃ ∀x.PHI(x)
* S-expression induction (induction on left and right branches of a tree).
PHI(NIL) ∧ ∀uu.[PHI(D uu)⊃PHI(uu)]
⊃ ∀u.PHI(u)
* Rank function induction (induction on some arbitrary function on the
arguments of the function which is reduced every time the function is called.
Rank gives us a way to compare the "definedness" of two functions.
Rank induction is the most general form of induction. The general idea
is to turn any other sort of induction into natural number induction.
E.g. List induction on u is effectively rank induction on LENGTH[u].)
Rank induction is useful when the function does not display any obvious
monotonically decreasing behavior, e.g. when one or another argument
fluctuates but the arguments as a whole keep getting smaller.
Let f be a function on a domain D. Let variables d,d1 range over D.
We are trying to show that some property PHI(d) holds for all d.
RANK(d) is defined to return an integer usable in the following
proof form:
∀d. [∀d1.[RANK(d1)<RANK(d)⊃RANK(d1)] ⊃ PHI(d)]
⊃ ∀d.PHI(d)
Note: the truth of these schemas is axiomatic -- they are NOT true for
nonstandard s-expressions such as circularly linked lists.
-------------------------------------------------------------------------
PROVING TERMINATION
-------------------
To show that a recursive function f terminates, we want to use some sort of
inductive argument which says that f is defined in terms of expressions
which are in some sense smaller. Then using some well-foundedness axiom, we
would conclude that f would eventually wind down to a non-recursive case.
For instance, the definition of length[u] has two parts.
length[u] ← IF N u THEN 0 (1)
ELSE 1+length[D u] (2)
The first part returns the value 0, which terminates immediately. The
second part is a bit harder. Our intuition says that length CDRs down list
u--each time length calls itself, it does so on a smaller u. And since we
know that this process eventually reduces u to NIL, our function must
terminate.
NOTE: this would NOT be true if we allowed nonstandard, e.g. circularly
linked, lists! The induction schema is necessary!
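For concreteness, here is that definition in internal notation as a sketch
(named LENGTH1 so as not to redefine the system LENGTH; ADD1 is 1+ in some
dialects). Applied to a circularly linked list it would never reach the NIL case.
    (DEFUN LENGTH1 (U)
      (COND ((NULL U) 0)                     ; part (1): terminates immediately
            (T (ADD1 (LENGTH1 (CDR U))))))   ; part (2): recurs on a smaller u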
Now, how do you prove this formally? You might think that the way to prove
termination would be to prove
∀u. (length[u] ≠ BOTTOM)
In actuality, the only way to express termination in our formal notation
is to prove something stronger, like ISNUMBER[length[u]] or
ISLIST[REVERSE[u]] or REVERSE[REVERSE[u]]=u -- if the function returns a
value with a characterizable property, then in particular it returns a value.
This may strike you as an overly strong, inappropriate approach, but
that's the best we can do in our notation. Once you realize that,
termination proofs are identical in structure to proofs of other
properties of functions. In fact, a proof of almost any property of a
function on general arguments implicitly proves termination -- you may not
have to do any work at all.
MORAL: Termination proofs are no different in form than other proofs about
functions! Actually true!
Cookbook
--------
Prove something about the form of the value returned by the function when
applied to general arguments; for instance ISLIST[f[x]] or ISSEXP[f[x]].
The proof may depend on assuming that the arguments are of a particular
sort. For instance, ISLIST[reverse[u]] depends on the fact ISLIST[u],
which may be assumed on the basis that u is of type LIST.
Prove a property involving the function, such as an invariance property,
perhaps involving another function. Again, the proof must be on general
arguments -- a proof that u*NIL=u would NOT constitute a proof that append
terminates.
In complicated cases, simple LIST or S-EXP recursion may not work. You
may need a rank function, which returns an integer given the arguments. A
successful rank function returns a strictly smaller value each time the
function calls itself.
-------------------------------------------------------------------------
General form of an inductive proof
----------------------------------
Prove: some theorem, using some method (probably induction on some variable).
-----
Defs: Definitions of all relevant functions, including type functions
---- like ISLIST.
Inductive hypothesis
--------------------
PHI[induction variable] ← property to be proved
This says which variable the induction is on.
Base case
---------
Expression
= simpler form Reason. Typical reasons are expanding
= still simpler form a definition, using type information,
= even simpler form factoring a conditional, using simple
= yet simpler form identities on CAR,CDR,CONS,APPEND,
... using a lemma, or arguing by analogy
= what you wanted to show with a proof of the same form.
Inductive case
--------------
Another chain of identities. The base case is usually much simpler than
the inductive case.
Lemmas if any...
------
Example Proving an invariance property by S-EXP recursion
-------
This is a rather thorny example which was worked out in class on Thursday
Nov. 6, 1980. The complications stem from the various preconditions which
must be attached to make sure that the arguments are of the proper form.
In particular, we must assume that x is somewhere in y (x≤y) before we can
call point[x,y] with confidence.
Notice that it was very important to state these preconditions exactly.
The core of the proof is easy to predict
from the form of the definition of point: we collapse an expression of the
form
D 'A.point[x,A y]
or D 'D.point[x,D y]
by applying the CAR or CDR and using the inductive hypothesis.
As someone else commented in class, it was reassuring to know that
blundering ahead really was the right approach. Several times during the
proof, McCarthy had to add new clauses to his inductive hypothesis. As
McCarthy commented, conditions you leave out initially will come back to
haunt you later. McCarthy's approach was not really any different from
that of anyone else, except that it showed less fear and more flexibility.
Weirdnesses: besides revising his inductive hypothesis, McCarthy modified
the form of the external notation for lambda expressions to
{args}[λvars.expression...]
with the arguments BEFORE the function. Not an important point -- he's
just experimenting with notation. I decided to leave these anomalies
intact so that you could get some feeling for how McCarthy approaches a
problem.
-------------------------------------------------------------------------
Prove: ∀x y.[x=get[y,point[x,y]]]
Immediate problem: the theorem as stated is not true! We need to add a
precondition to ensure that point returns a value.
Prove: ∀x y.[x≤y ⊃ x=get[y,point[x,y]]]
-----
Defs: get[y,p] ←
---- IF N p THEN y
ELSE IF A p='A THEN get[A y,D p]
        ELSE get[D y,D p]
      point[x,y] ←
        IF x=y THEN NIL
        ELSE IF AT y THEN 'NO
        ELSE {point[x,A y]}[λu: IF u='NO THEN
                 {point[x,D y]}
                 [λv: IF v='NO THEN 'NO ELSE 'D . v]
              ELSE 'A . u]
x≤y ← x=y ∨ ¬AT y∧[x≤A y ∨ x≤D y]
Inductive hypothesis:
PHI(y) ≡ x≤y ⊃ x=get[y,point[x,y]]
∀y.[AT y⊃PHI(y)] ∧ ∀y.[¬AT y∧PHI(A y)∧PHI(D y)⊃PHI(y)]
⊃ ∀y.PHI(y)
Base case (assume AT y):
Wff Reason
get[y,point[x,y]]
= get[x,point[x,x]]                 x=y follows from the def. of ≤
= get[x,IF x=y THEN NIL ELSE..] Def of point
= get[x,NIL] Since x=y
= IF N NIL THEN x ELSE ... Def of get
= x
Inductive case (assume ¬AT y ∧ PHI(A y) ∧ PHI(D y)):
get[y,point[x,y]]
= get[y,IF x=y THEN NIL ELSE Def of point
IF AT y THEN 'NO ELSE ...
We are in the third case since we
know x≠y and ¬(AT y).
= {point[x,A y]}
     [λu.IF u='NO THEN
           {point[x,D y]}
           [λv.IF v='NO THEN get[y,'NO]
                ELSE get[y,'D.v]]
         ELSE get[y,'A.u]]          Def of point (third case), then
                                    factoring get through the conditional.
We now have two cases. Since x≠y we
have from the definition of ≤ that
either (x≤A y), or (x≤D y).
Case 1. Assume x≤A y.
We have to make sure point returns a value so that u≠'NO, so redefine
PHI(y) ≡ x≤y ⊃ point[x,y]≠'NO ∧ x=get[y,point[x,y]]
= get[y,'A.point[x,A y]] Evaluating IF-THEN-ELSE
= get[A y,D 'A.point[x,A y]] Expanding get. Note that ptr≠NIL.
= get[A y,point[x,A y]]             Evaluating the CDR of a cons.
= x                                 By inductive hypothesis on A y.
Case 2. Assume x≤D y.
= get[y,'D.point[x,D y]]            Evaluating IF-THEN-ELSE.
= get[D y,D 'D.point[x,D y]]        Expanding get. Note that ptr≠NIL.
= get[D y,point[x,D y]]             Evaluating the CDR of a cons.
= x                                 By inductive hypothesis on D y.
Example Termination proof by simple list induction.
-------
Prove: Termination of append.
----- We will show ISLIST[append[u,v]] by inducting on u.
Defs: append[u,v] ← IF N u THEN v
---- ELSE A u . append[D u,v]
ISLIST[x] ← ¬AT x ∧ (N x ∨ ISLIST[D x])
PHI[u] ← ISLIST[append[u,v]]
Base case:
---------
ISLIST[append[NIL,v]]
= ISLIST[IF N NIL THEN v ELSE ...] Expanding def of append.
= ISLIST v Evaluating IF-THEN
= T By sort of v
Inductive case (assume ISLIST[append[D uu,v]]):
--------------
ISLIST[append[uu,v]]
= ISLIST[A uu . append[D uu,v]]        Append def, uu is not null.
= ISLIST[D [A uu . append[D uu,v]]]    Islist def, arg is not an atom.
= ISLIST[append[D uu,v]]               Def of CONS
= T Inductive hypothesis
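For completeness, the definition of append used above, transcribed into internal
notation as a sketch (named APPEND2 so as not to redefine the system APPEND):
    (DEFUN APPEND2 (U V)
      (COND ((NULL U) V)                               ; base case: u exhausted
            (T (CONS (CAR U) (APPEND2 (CDR U) V)))))   ; CDR down u
    ;; (APPEND2 '(A B) '(C D)) returns (A B C D)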
-------------------------------------------------------------------------
Bibliography
------------
Some other books you may want to look through:
The Little Lisper by Dan Friedman Down-to-earth/simpleminded
The Anatomy of Lisp by John Allen Probing/detailed
Lisp 1.6 Manual by ... Your basic textbook/not deep
Lisp by Patrick Winston
A SUPPLEMENT TO LISP: PROGRAMMING AND PROVING: Errors and additions
CS 206 (Lisp) / Thurs. 13 Nov. 1980 / Professor John McCarthy / TA: Scott Kim
The definition of group1 should read *****PAGE 4
group1[u,w] ←
IF N u THEN <w>
ELSE IF N D u THEN <A u . w>
ELSE IF A u = AD u
THEN group1[D u,A u . w]
ELSE (A u . w) . group1[D u,NIL]
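For reference, a sketch of this corrected definition in internal notation, so it
can be checked against the examples given earlier (dialect details vary):
    (DEFUN GROUP1 (U W)                      ; W accumulates the current run of atoms
      (COND ((NULL U) (LIST W))
            ((NULL (CDR U)) (LIST (CONS (CAR U) W)))
            ((EQ (CAR U) (CADR U)) (GROUP1 (CDR U) (CONS (CAR U) W)))
            (T (CONS (CONS (CAR U) W) (GROUP1 (CDR U) NIL)))))
    ;; with (DEFUN GROUP (U) (COND ((NULL U) NIL) (T (GROUP1 U NIL))))
    ;; (GROUP '(A A B C C C)) returns ((A A) (B) (C C C))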
More suggestions for proving properties of programs: *****PAGE 5
* Look for a way to use previously proved results.
* Strengthen the inductive hypothesis. Sometimes it is necessary to prove
something stronger than what we originally set out to prove. This is
especially true in the case of a function which is defined in terms of
another function which has an extra argument which is normally called
with an initial value of NIL. For instance, the proof in the book that
rev1[u,NIL] = reverse[u]
is treated as a special case of
rev1[u,v] = reverse[u]*v
In general, the inductive hypothesis should carry all the assumptions
you would like to use along the way.
The definitions of List induction and S-exp induction *****PAGE 6
should be interchanged.
The rank induction schema should read *****PAGE 7
∀d. [∀d1.[RANK(d1)<RANK(d)⊃PHI(d1)] ⊃ PHI(d)]
---
Also add the following comment.
Note that PHI(base case) follows directly from the inductive
hypothesis, since ∀d1.[RANK(d1)<RANK(d)⊃PHI(d1)] is vacuously true.
Add new recipe to the proof of termination cookbook: *****PAGE 8
Termination on form. Sometimes it is easy to see that a function
will terminate by examining its form. For instance, if the function
is defined in terms of a call on itself in which one of the
arguments is reduced by 1. This argument serves as a sort of counter
which will eventually bottom out at 0--assuming our function definition
begins with a test for 0. CAR and CDR can also serve as counters.
In general, any function which is primitive recursive terminates
on form. The form of a primitive recursive function is described
on pages 33-36.
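For instance, a function shaped like the following terminates on form -- the
first argument serves as the counter. This is only an illustrative sketch: the
name TWO-TO-THE is made up, and TIMES and SUB1 are written * and 1- in some
dialects.
    (DEFUN TWO-TO-THE (N)                    ; 2^N for N ≥ 0
      (COND ((ZEROP N) 1)                    ; the test for 0 comes first
            (T (TIMES 2 (TWO-TO-THE (SUB1 N))))))  ; N is reduced by 1 each call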
In the inductive case we need two proofs before we know we are *****PAGE 10
in the third case of the IF-THEN-ELSE: the case where x=y,
and the lemma ¬(x≤y) ⊃ point[x,y]='NO.
The definition of ISLIST should read: *****PAGE 11
ISLIST[x] ← N x ∨ [¬AT x ∧ ISLIST(D x)]
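In internal notation the corrected ISLIST is simply (a sketch; ISLIST is this
handout's own type predicate, not a system function):
    (DEFUN ISLIST (X)
      (OR (NULL X)                           ; NIL is a list
          (AND (NOT (ATOM X)) (ISLIST (CDR X)))))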
Add on: *****PAGE 12
Example Same termination proof but by rank induction.
-------
Prove: Termination of append.
----- We will show ISLIST[append[u,v]] by rank induction on u.
Defs: append[u,v] ← IF N u THEN v
---- ELSE A u . append[D u,v]
ISLIST[x] ← ¬AT x ∧ (N x ∨ ISLIST[D x])
length[u] ← IF N u THEN 0 ELSE 1+length[D u]
PHI[u] ← ISLIST[append[u,v]]
RANK[u,v] ← length[u]
Inductive principle:
∀u. [∀u1.[length(u1)<length(u)⊃ISLIST(append[u1,v])] ⊃ ISLIST(append[u,v])]
⊃ ∀u.PHI(u)
Inductive hypothesis:
∀u1.[length(u1)<length(u) ⊃ ISLIST(append[u1,v])]
Inductive conclusion:
ISLIST(append[u,v])
Proof
-----
ISLIST(append[u,v])
Base case Assume RANK(u,v)=0. By definition of rank, length(u)=0.
---------
= ISLIST(append[NIL,v]) Lemma: length=0 ⊃ N u
= ISLIST(v)
= T
Inductive case Assume RANK(u,v)>0. By definition of rank, length(u)>0.
--------------
= ISLIST(A u . append[D u,v]) Lemma: length>0 ⊃ ¬(N u)
= ISLIST( append[D u,v]) Def of ISLIST
= T Inductive hypothesis, plus
lemma that length[D u]<length[u]
This example illustrates the form of a proof by rank induction. In this
case it is easier to use list induction. Rank induction is often useful if
some of the arguments serve as stacks which store up values to return.
Notice that in the base case we are able to prove the inductive conclusion
without using the inductive hypothesis at all. The lemma on length is basic
enough that we do not need to give an explicit proof. It was noted in one of the
problem sessions that we don't need the written out definition of ISLIST --
domain mapping information is sufficient to tell us what we need to know.